Noise alters beta-band activity in superior temporal cortex during audiovisual speech processing
Authors
Abstract
Speech recognition is improved when complementary visual information is available, especially under noisy acoustic conditions. Functional neuroimaging studies have suggested that the superior temporal sulcus (STS) plays an important role in this improvement. The spectrotemporal dynamics underlying audiovisual speech processing in the STS, and how these dynamics are affected by auditory noise, are not well understood. Using electroencephalography, we investigated how auditory noise affects audiovisual speech processing in event-related potentials (ERPs) and oscillatory activity. Spoken syllables were presented in audiovisual (AV) and auditory-only (A) trials at three auditory noise levels (no, low, and high). Responses to A stimuli were subtracted from responses to AV stimuli, separately for each noise level, and these difference responses were subjected to statistical analysis. Central ERPs differed between the no-noise condition and the two noise conditions from 130 to 150 ms and 170 to 210 ms after auditory stimulus onset. Source localization using the local autoregressive average procedure revealed an involvement of the lateral temporal lobe, encompassing the superior and middle temporal gyrus. Neuronal activity in the beta band (16 to 32 Hz) was suppressed at central channels around 100 to 400 ms after auditory stimulus onset in the AV minus A signal averaged over the three noise levels. This suppression was smaller in the high-noise condition than in the no-noise and low-noise conditions, possibly reflecting disturbed recognition or altered processing of multisensory speech stimuli. Source analysis of the beta-band effect using linear beamforming demonstrated an involvement of the STS. Our study shows that auditory noise alters audiovisual speech processing in ERPs localized to the lateral temporal lobe and provides evidence that beta-band activity in the STS plays a role in audiovisual speech processing under regular and noisy acoustic conditions.
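The core analysis steps described above — subtracting the trial-averaged auditory-only (A) response from the audiovisual (AV) response, and estimating beta-band (16–32 Hz) power over time — can be sketched in a few lines. This is a minimal illustration with synthetic data, not the authors' pipeline: the sampling rate, filter order, and Hilbert-envelope power estimate are assumptions for demonstration only.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

FS = 500  # sampling rate in Hz (assumed; not stated in the abstract)

def av_minus_a(av_trials, a_trials):
    """Difference of trial-averaged ERPs: mean(AV) - mean(A).

    av_trials, a_trials: arrays of shape (n_trials, n_samples).
    """
    return av_trials.mean(axis=0) - a_trials.mean(axis=0)

def beta_band_power(signal, fs=FS, band=(16.0, 32.0)):
    """Beta-band power envelope via band-pass filtering plus Hilbert transform."""
    nyq = fs / 2.0
    b, a = butter(4, [band[0] / nyq, band[1] / nyq], btype="band")
    filtered = filtfilt(b, a, signal)          # zero-phase band-pass
    return np.abs(hilbert(filtered)) ** 2      # instantaneous power

# Demo with synthetic single-channel epochs (1 s, 20 trials per condition):
rng = np.random.default_rng(0)
t = np.arange(FS) / FS
a_trials = rng.normal(0.0, 0.1, (20, FS))
# AV trials carry an extra 24 Hz (beta) component on top of the A response
av_trials = a_trials + 0.5 * np.sin(2 * np.pi * 24 * t)

diff = av_minus_a(av_trials, a_trials)   # isolates the 24 Hz component
power = beta_band_power(diff)            # envelope power, ~0.25 at plateau
```

In an actual EEG analysis this would be applied per channel and per noise level, with the resulting power time courses entered into the statistical comparison; here it only illustrates the arithmetic of the AV minus A contrast and the band-limited power estimate.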
Similar articles
An fMRI Study of Audiovisual Speech Perception Reveals Multisensory Interactions in Auditory Cortex
Research on the neural basis of speech-reading implicates a network of auditory language regions involving inferior frontal cortex, premotor cortex and sites along superior temporal cortex. In audiovisual speech studies, neural activity is consistently reported in the posterior superior temporal sulcus (pSTS) and this site has been implicated in multimodal integration. Traditionally, multisensory i...
Early and late beta-band power reflect audiovisual perception in the McGurk illusion.
The McGurk illusion is a prominent example of audiovisual speech perception and the influence that visual stimuli can have on auditory perception. In this illusion, a visual speech stimulus influences the perception of an incongruent auditory stimulus, resulting in a fused novel percept. In this high-density electroencephalography (EEG) study, we were interested in the neural signatures of the ...
Prediction and constraint in audiovisual speech perception.
During face-to-face conversational speech, listeners must efficiently process a rapid and complex stream of multisensory information. Visual speech can serve as a critical complement to auditory information because it provides cues to both the timing of the incoming acoustic signal (the amplitude envelope, influencing attention and perceptual sensitivity) and its content (place and manner of art...
Comprehension of audiovisual speech: Data-based sorting of independent components of fMRI activity
To find out whether decreased intelligibility of natural audiovisual speech would enhance the involvement of the sensorimotor speech comprehension network, we had ten adults listen to and view a video in which the intelligibility of continuous audiovisual speech was varied by altering the loudness between well audible, just audible (–18 dB weaker than loud), and mute. Occasionally the voice a...
Electrocorticography Reveals Enhanced Visual Cortex Responses to Visual Speech.
Human speech contains both auditory and visual components, processed by their respective sensory cortices. We test a simple model in which task-relevant speech information is enhanced during cortical processing. Visual speech is most important when the auditory component is uninformative. Therefore, the model predicts that visual cortex responses should be enhanced to visual-only (V) speech com...
Journal: NeuroImage
Volume: 70
Pages: -
Publication year: 2013